
    Deep Liquid State Machines with Neural Plasticity and On-Device Learning

    The Liquid State Machine (LSM) is a recurrent spiking neural network designed for efficient processing of spatio-temporal streams of information. LSMs have several built-in features such as robustness, fast training and inference, generalizability, continual learning (no catastrophic forgetting), and energy efficiency. These features make the LSM an ideal network for deploying intelligence on-device. In general, single LSMs are unable to solve complex real-world tasks. Recent literature has shown the emergence of hierarchical architectures that support temporal information processing over different time scales. However, these approaches typically do not investigate the optimal topology for communication between layers in the hierarchical network, or they assume prior knowledge about the target problem and are therefore not generalizable. In this thesis, a deep Liquid State Machine (deep-LSM) network architecture is proposed. The deep-LSM uses staggered reservoirs to process temporal information on multiple timescales. A key feature of this network is that neural plasticity and attention are embedded in the topology to bolster its performance on complex spatio-temporal tasks. An advantage of the deep-LSM is that it exploits the random projection native to the LSM, as well as local plasticity mechanisms, to optimize the data transfer between sequential layers. Both random projections and local plasticity mechanisms are ideal for on-device learning due to their low computational complexity and the absence of backpropagated error. The deep-LSM is deployed on a custom learning architecture with memristors to study the feasibility of on-device learning. Its performance is demonstrated on speech recognition and seizure detection applications.
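    To make the layered-reservoir idea concrete, the following is a minimal sketch, assuming a simple leaky integrate-and-fire liquid implemented in NumPy; the names and constants (Reservoir, n_neurons, leak, the 0.1 input rate) are illustrative assumptions, not taken from the thesis. It shows two staggered reservoirs connected by a fixed random projection, so no error is backpropagated between layers, and a linear readout would be trained on the collected states.

    # Illustrative sketch only; parameters and class names are assumptions.
    import numpy as np

    class Reservoir:
        """Fixed random recurrent spiking layer (the 'liquid')."""
        def __init__(self, n_in, n_neurons, leak=0.9, seed=0):
            rng = np.random.default_rng(seed)
            self.w_in = 0.5 * rng.standard_normal((n_neurons, n_in))        # fixed input projection
            self.w_rec = 0.1 * rng.standard_normal((n_neurons, n_neurons))  # fixed recurrent weights
            self.leak = leak
            self.v = np.zeros(n_neurons)       # membrane potentials
            self.spikes = np.zeros(n_neurons)  # spikes from the previous step

        def step(self, x):
            # Leaky integration of external input plus recurrent spikes.
            self.v = self.leak * self.v + self.w_in @ x + self.w_rec @ self.spikes
            self.spikes = (self.v > 1.0).astype(float)  # threshold crossing -> spike
            self.v[self.spikes > 0] = 0.0               # reset neurons that fired
            return self.spikes

    # Two staggered reservoirs; the second receives the first's spikes
    # through a fixed random projection (no backpropagated error).
    res1 = Reservoir(n_in=16, n_neurons=100, seed=1)
    res2 = Reservoir(n_in=100, n_neurons=100, seed=2)

    states = []
    for t in range(50):
        x = (np.random.rand(16) < 0.1).astype(float)  # toy Poisson-like input spikes
        s1 = res1.step(x)
        s2 = res2.step(s1)                            # random inter-layer projection
        states.append(s2)
    states = np.stack(states)                         # features for a linear readout

    Because only the readout (and, in the thesis, local plasticity at the inter-layer projections) is adapted, the update cost stays low, which is what makes this style of network attractive for on-device learning.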

    Lifelong Learning in Spiking Neural Networks Through Neural Plasticity

    Lifelong learning, the ability to learn from continually changing data distributions in real time, is a significant challenge in artificial intelligence. The central issue is that new learning tends to interfere with previously acquired memories, a phenomenon known as catastrophic forgetting. It is accompanied by several related challenges, such as knowledge transfer and adaptation, few-shot learning, and the processing of noisy data. Since humans do not seem to suffer from this problem, researchers have applied biologically inspired techniques to address it, including metaplasticity, synaptic consolidation, and memory replay. Although these approaches have seen some success in traditional neural networks, there has been limited exploration of how to support lifelong learning in spiking networks. In general, spiking neural networks are efficient in resource-constrained environments due to their inherently sparse, asynchronous, and low-precision computation. However, spiking networks require a different set of learning rules from traditional rate-based models. A few works have shown that simple Hebbian rules can support aspects of lifelong learning, but there is a significant gap in understanding how spiking networks can solve the complex tasks encountered in a lifelong learning setting. We propose to address this with compositional biological mechanisms that allow these networks to overcome the limitations of simple Hebbian models. In this work, we present NACHOS, a model that integrates multiple biologically inspired mechanisms to promote lifelong learning at both the synapse level and the network level. At the synapse level, NACHOS uses regularization and homeostatic mechanisms to protect the information learned over time. At the network level, NACHOS introduces heterogeneous learning rules and a dynamic architecture to form distributed processing across tasks without impeding learning. These mechanisms work in tandem to significantly boost the performance of baseline spiking networks, by 3x in a lifelong learning scenario, and, unlike Hebbian approaches, they scale beyond a single layer. The key features of NACHOS are that (a) it operates without task knowledge, (b) it is evaluated on online continual learning, (c) it does not grow over time, and (d) it has better energy-accuracy trade-offs than existing rate-based models. This enables the model to be deployed in the wild, where the AI system is not aware of task switching. NACHOS is demonstrated on several lifelong learning scenarios, where it matches the performance of state-of-the-art non-spiking lifelong learning models. In summary, we explore the role of spiking networks in lifelong learning and provide a blueprint for their adoption in resource-constrained environments.
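    As an illustration of how synapse-level protection and homeostasis can be combined in a purely local rule, the sketch below pairs a Hebbian update with an importance-weighted pull toward consolidated weights and a multiplicative rate-homeostasis gain. This is a simplified example under assumed names and constants (local_update, homeostatic_scale, lr, lam, target), not the published NACHOS rule.

    # Illustrative sketch only; not the NACHOS update rule.
    import numpy as np

    def local_update(w, pre, post, w_ref, importance, lr=0.01, lam=0.1):
        """Hebbian step pulled back toward a protected reference weight."""
        hebb = np.outer(post, pre)                  # correlation-driven change
        protect = lam * importance * (w - w_ref)    # penalize drift on important synapses
        return w + lr * (hebb - protect)

    def homeostatic_scale(rates, target=0.05, eta=0.001):
        """Multiplicative gain nudging each neuron toward a target firing rate."""
        return 1.0 + eta * (target - rates)

    # Toy usage on random spike activity.
    rng = np.random.default_rng(0)
    w = 0.1 * rng.standard_normal((20, 30))
    w_ref = w.copy()                       # weights consolidated after an earlier task
    importance = rng.random((20, 30))      # per-synapse importance estimate

    pre = (rng.random(30) < 0.1).astype(float)
    post = (rng.random(20) < 0.1).astype(float)
    w = local_update(w, pre, post, w_ref, importance)
    w *= homeostatic_scale(post)[:, None]  # scale each row by its neuron's gain

    Both terms use only locally available quantities (pre- and post-synaptic activity, the synapse's own reference weight and importance), which is what keeps such rules compatible with task-free, online operation on constrained hardware.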

    Biological underpinnings for lifelong learning machines

    Biological organisms learn from interactions with their environment throughout their lifetime. For artificial systems to successfully act and adapt in the real world, it is desirable that they similarly be able to learn on a continual basis. This challenge is known as lifelong learning, and it remains largely unsolved. In this Perspective article, we identify a set of key capabilities that artificial systems will need in order to achieve lifelong learning. We describe a number of biological mechanisms, both neuronal and non-neuronal, that help explain how organisms solve these challenges, and we present examples of biologically inspired models and biologically plausible mechanisms that have been applied to artificial systems in the quest to develop lifelong learning machines. We discuss opportunities to further our understanding and advance the state of the art in lifelong learning, aiming to bridge the gap between natural and artificial intelligence.